5 research outputs found

    The irreducible vectors of a lattice: Some theory and applications

    Get PDF
    The main idea behind lattice sieving algorithms is to reduce a sufficiently large number of lattice vectors with each other so that a set of short enough vectors is obtained. It is therefore natural to study vectors which cannot be reduced. In this work we give a concrete definition of an irreducible vector and study the properties of the set of all such vectors. We show that the set of irreducible vectors is a subset of the set of Voronoi relevant vectors and study its properties. For extremal lattices this set may contain as many as 2^n vectors, which leads us to define the notion of a complete system of irreducible vectors, whose size can be upper-bounded by the kissing number. One of our main results shows that modified heuristic sieving algorithms heuristically approximate such a set (modulo sign). We provide experiments in low dimensions which support this theory. Finally, we give some applications of this set in the study of lattice problems such as SVP, SIVP and CVPP. The introduced notions, as well as various results derived along the way, may provide further insights into lattice algorithms and motivate new research into understanding these algorithms better.
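    To make the notion concrete, the following is a minimal sketch (not the paper's algorithm): it enumerates a window of a small 2D lattice and flags vectors that cannot be written as the sum of two strictly shorter lattice vectors. The basis `B` and window radius `R` are arbitrary choices for illustration, and the check is only valid within the enumerated window.

    ```python
    import numpy as np

    # Hypothetical 2D lattice basis, chosen only for illustration.
    B = np.array([[2.0, 0.0],
                  [1.0, 2.0]])

    # Enumerate lattice vectors x*b1 + y*b2 with small coefficients.
    R = 4
    vecs = [tuple(x * B[0] + y * B[1])
            for x in range(-R, R + 1) for y in range(-R, R + 1)
            if (x, y) != (0, 0)]
    lattice = set(vecs)

    def norm(v):
        return float(np.linalg.norm(v))

    def is_irreducible(v):
        """v is irreducible if it is not the sum of two strictly shorter
        lattice vectors (checked only against the enumerated window)."""
        nv = norm(v)
        for u in vecs:
            w = tuple(np.array(v) - np.array(u))
            if w in lattice and norm(u) < nv and norm(w) < nv:
                return False
        return True

    irreducible = [v for v in vecs if is_irreducible(v)]
    ```

    In this toy lattice, the shortest vector (2, 0) is irreducible, while (4, 0) = (2, 0) + (2, 0) is not; the set also comes in ± pairs, matching the "modulo sign" phrasing above.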

    Nearest neighbor decoding for Tardos fingerprinting codes

    No full text
    Over the past decade, various improvements have been made to Tardos' collusion-resistant fingerprinting scheme [Tardos, STOC 2003], ultimately resulting in a good understanding of what is the minimum code length required to achieve collusion-resistance. In contrast, decreasing the cost of the actual decoding algorithm for identifying the potential colluders has received less attention, even though previous results have shown that using joint decoding strategies, deemed too expensive for decoding, may lead to better code lengths. Moreover, in dynamic settings a fast decoder may be required to provide answers in real-time, further raising the question of whether the decoding costs of score-based fingerprinting schemes can be decreased with a smarter decoding algorithm. In this paper we show how to model the decoding step of score-based fingerprinting as a nearest neighbor search problem, and how this relation allows us to apply techniques from the field of (approximate) nearest neighbor searching to obtain decoding times which are sublinear in the total number of users. As this does not affect the encoding and embedding steps, this decoding mechanism can easily be deployed within existing fingerprinting schemes, and this may bring a truly efficient joint decoder closer to reality. Besides the application to fingerprinting, similar techniques can potentially be used to decrease the decoding costs of group testing methods, which may be of independent interest.
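    The nearest-neighbor view can be illustrated with a toy example: in the symmetric score-based setting, each user's accusation score is an inner product between a precomputed user vector and a vector derived from the pirate output, so finding the highest-scoring user is a maximum-inner-product (nearest-neighbor-style) search. All names, sizes, and the collusion strategy below are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_users, code_len = 100, 2000   # illustrative sizes only

    # Tardos biases p_i and user codewords X (simplified generation).
    p = rng.uniform(0.1, 0.9, code_len)
    X = (rng.uniform(size=(n_users, code_len)) < p).astype(int)

    # Symmetric Tardos score of user j: sum_i sigma(y_i) * g(X[j,i], p_i),
    # i.e. an inner product between a precomputed row U[j] and a vector v(y).
    U = np.where(X == 1, np.sqrt((1 - p) / p), -np.sqrt(p / (1 - p)))

    # Two colluders produce y under the marking assumption (toy strategy:
    # agree where their symbols agree, flip a coin where they differ).
    c1, c2 = 3, 7
    coin = rng.uniform(size=code_len) < 0.5
    y = X[c1] & X[c2] | (X[c1] ^ X[c2]) & coin

    v = 2 * y - 1            # map pirate output {0,1} to {-1,+1}
    scores = U @ v           # all scores at once: one matrix-vector product
    accused = int(np.argmax(scores))
    ```

    The point of the modeling step is that `argmax(U @ v)` is exactly a maximum-inner-product query against the rows of `U`, which (approximate) nearest neighbor data structures can answer in time sublinear in `n_users` instead of the linear scan shown here.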

    Round5: compact and fast post-quantum public-key encryption

    Get PDF
    We present the ring-based configuration of the NIST submission Round5, a Ring Learning with Rounding (RLWR)-based IND-CPA secure public-key encryption scheme. It combines elements of the NIST candidates Round2 (use of RLWR as the underlying problem, having 1 + x + ... + x^n with n+1 prime as reduction polynomial, allowing for a large design space) and HILA5 (the constant-time error-correction code XEf). Round5 performs part of encryption and decryption via multiplication in Z_p[x]/(x^(n+1) − 1), and uses secret-key polynomials that have a factor (x − 1). This technique reduces the failure probability and makes correlation in the decryption error negligibly low. The latter allows the effective application of error correction through XEf to further reduce the failure rate and shrink parameters, improving both security and performance. We argue for the security of Round5, both formal and concrete. We further analyze the decryption error, and give analytical as well as experimental results arguing that the decryption failure rate is lower than in Round2, with negligible correlation in errors. IND-CCA secure parameters constructed using Round5 and offering more than 232 and 256 bits of quantum and classical security respectively, under the conservative core sieving model, require only 2144 B of bandwidth. For comparison, similar competing proposals require over 30% more bandwidth. Furthermore, the high flexibility of Round5's design allows choosing finely tuned parameters fitting the needs of diverse applications, ranging from the IoT to high-security levels.
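    As a sketch of the ring arithmetic involved (toy parameters, not Round5's actual ones): multiplication in Z_p[x]/(x^(n+1) − 1) is a cyclic convolution of coefficient vectors, and a secret with a factor (x − 1) has coefficients summing to 0 mod p, since such a polynomial vanishes at x = 1.

    ```python
    import numpy as np

    # Toy parameters, assumptions for illustration; the real scheme uses
    # n+1 prime with much larger n and scheme-specific moduli.
    n, p = 10, 256
    N = n + 1              # work in Z_p[x]/(x^N - 1)

    def polymul(a, b):
        """Schoolbook cyclic convolution: multiply in Z_p[x]/(x^N - 1)."""
        c = np.zeros(N, dtype=np.int64)
        for i in range(N):
            for j in range(N):
                c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % p
        return c

    rng = np.random.default_rng(0)

    # Secret with a factor (x - 1): multiply a random ternary s' by (x - 1).
    s_prime = rng.integers(-1, 2, N) % p
    x_minus_1 = np.zeros(N, dtype=np.int64)
    x_minus_1[0], x_minus_1[1] = p - 1, 1   # -1 + x, reduced mod p

    s = polymul(s_prime, x_minus_1)

    # s(1) = s'(1) * 0 = 0 mod p: the coefficients of s sum to 0 mod p.
    assert int(s.sum()) % p == 0
    ```

    The coefficient-sum property is one way to see why a (x − 1) factor helps control the decryption error: evaluating the secret at x = 1 collapses to zero, which constrains how error terms accumulate.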
